76 research outputs found

    High-level feature detection from video in TRECVid: a 5-year retrospective of achievements

    Successful and effective content-based access to digital video requires fast, accurate and scalable methods to determine the video content automatically. A variety of contemporary approaches to this rely on text taken from speech within the video, on matching one video frame against others using low-level characteristics like colour, texture or shapes, or on determining and matching objects appearing within the video. Possibly the most important technique, however, is one which determines the presence or absence of a high-level or semantic feature within a video clip or shot. By utilizing dozens, hundreds or even thousands of such semantic features we can support many kinds of content-based video navigation. Critically, however, this depends on being able to determine whether each feature is or is not present in a video clip. The last 5 years have seen much progress in the development of techniques to determine the presence of semantic features within video. This progress can be tracked in the annual TRECVid benchmarking activity, where dozens of research groups measure the effectiveness of their techniques on common data using an open, metrics-based approach. In this chapter we summarise the work done on the TRECVid high-level feature task, showing the progress made year on year. This provides a fairly comprehensive statement of where the state of the art is regarding this important task, not just for one research group or one approach, but across the spectrum. We then use this past and ongoing work as a basis for highlighting the trends that are emerging in this area and the questions which remain to be addressed before we can achieve large-scale, fast and reliable high-level feature detection in video.
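
    As a rough illustration of what such a high-level feature (concept) detector does, the sketch below scores a video shot's keyframe for one concept using a low-level colour descriptor and a supervised classifier. It is a minimal sketch only, assuming NumPy and scikit-learn are available and using synthetic training data; the descriptor, concept name and classifier choice are illustrative assumptions, not the systems benchmarked in TRECVid.

    # Minimal concept-detector sketch (assumed: numpy + scikit-learn installed).
    import numpy as np
    from sklearn.svm import SVC

    def colour_histogram(keyframe, bins=8):
        """Toy low-level descriptor: per-channel colour histogram of an RGB keyframe."""
        hist = [np.histogram(keyframe[..., c], bins=bins, range=(0, 255))[0]
                for c in range(3)]
        h = np.concatenate(hist).astype(float)
        return h / h.sum()

    # Hypothetical annotated training shots: 1 = concept (e.g. "outdoor") present, 0 = absent.
    rng = np.random.default_rng(0)
    train_frames = rng.integers(0, 256, size=(40, 32, 32, 3))
    train_labels = rng.integers(0, 2, size=40)
    X = np.array([colour_histogram(f) for f in train_frames])
    detector = SVC(probability=True).fit(X, train_labels)

    # Score an unseen shot: estimated probability that the concept is present.
    test_frame = rng.integers(0, 256, size=(32, 32, 3))
    score = detector.predict_proba([colour_histogram(test_frame)])[0, 1]
    print(f"P(concept present) = {score:.2f}")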

    Everyday concept detection in visual lifelogs: validation, relationships and trends

    The Microsoft SenseCam is a small, lightweight wearable camera used to passively capture photos and other sensor readings from a user's day-to-day activities. It can capture up to 3,000 images per day, equating to almost 1 million images per year. It is used to aid memory by creating a personal multimedia lifelog, or visual recording of the wearer's life. However, the sheer volume of image data captured within a visual lifelog creates a number of challenges, particularly for locating relevant content. Within this work, we explore the applicability of semantic concept detection, a method often used within video retrieval, to the novel domain of visual lifelogs. A concept detector models the correspondence between low-level visual features and high-level semantic concepts (such as indoors, outdoors, people, buildings, etc.) using supervised machine learning. By doing so it determines the probability of a concept's presence. We apply detection of 27 everyday semantic concepts to a lifelog collection composed of 257,518 SenseCam images from 5 users. The results were then evaluated on a subset of 95,907 images to determine the precision of detection for each semantic concept. We conduct further analysis of the temporal consistency, co-occurrence and trends within the detected concepts to more extensively investigate the robustness of the detectors within this novel domain. We additionally present future applications of concept detection within the domain of lifelogging.
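
    The kind of post-hoc analysis described above (per-concept precision on a judged subset, plus concept co-occurrence) can be sketched as follows. This is a minimal sketch under assumed inputs: the detection scores, ground-truth judgements, decision threshold and the four concept names are synthetic placeholders, not the paper's data for the 27 concepts.

    # Per-concept precision and co-occurrence counts (assumed: numpy installed).
    import numpy as np

    concepts = ["indoors", "outdoors", "people", "buildings"]  # placeholder subset of concepts
    rng = np.random.default_rng(1)
    probs = rng.random((1000, len(concepts)))          # detector confidences per image/concept
    truth = rng.integers(0, 2, (1000, len(concepts)))  # human judgements on the evaluated subset
    detected = probs >= 0.5                            # assumed decision threshold

    for j, name in enumerate(concepts):
        true_pos = np.sum(detected[:, j] & (truth[:, j] == 1))
        precision = true_pos / max(int(detected[:, j].sum()), 1)
        print(f"{name:10s} precision = {precision:.2f}")

    # Co-occurrence: how often two concepts are detected in the same image.
    cooc = detected.astype(int).T @ detected.astype(int)
    print("indoors & people detected together in", cooc[0, 2], "images")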

    Insulin-like growth factor 2 (IGF2) protects against Huntington's disease through the extracellular disposal of protein aggregates

    Impaired neuronal proteostasis is a salient feature of many neurodegenerative diseases, highlighting alterations in the function of the endoplasmic reticulum (ER). We previously reported that targeting the transcription factor XBP1, a key mediator of the ER stress response, delays disease progression and reduces protein aggregation in various models of neurodegeneration. To identify disease-modifier genes that may explain the neuroprotective effects of XBP1 deficiency, we performed gene expression profiling of the brain cortex and striatum of these animals and uncovered insulin-like growth factor 2 (Igf2) as the major upregulated gene. Here, we studied the impact of IGF2 signaling on protein aggregation in models of Huntington's disease (HD) as proof of concept. Cell culture studies revealed that IGF2 treatment decreases the load of intracellular aggregates of mutant huntingtin and of a polyglutamine peptide. These results were validated using induced pluripotent stem cell (iPSC)-derived medium spiny neurons from HD patients and spinocerebellar ataxia cases. The reduction in the levels of mutant huntingtin was associated with a decrease in the half-life of the intracellular protein. The decrease in the levels of abnormal protein aggregation triggered by IGF2 was independent of the activity of autophagy and the proteasome pathways, the two main routes for mutant huntingtin clearance. Conversely, IGF2 signaling enhanced the secretion of soluble mutant huntingtin species through exosomes and microvesicles, involving changes in actin dynamics. Administration of IGF2 into the brains of HD mice using gene therapy led to a significant decrease in the levels of mutant huntingtin in three different animal models. Moreover, analysis of human postmortem brain tissue and blood samples from HD patients showed a reduction in IGF2 levels. This study identifies IGF2 as a relevant factor deregulated in HD, operating as a disease modifier that buffers the accumulation of abnormal protein species.

    Multilocational Evaluation of Pigeonpea for Broad-Based Resistance to Fusarium Wilt in India

    Nine hundred and fifty-nine pigeonpea germplasm and breeding lines were evaluated for resistance to wilt caused by Fusarium udum Butler at 12 locations in India over a period of 7 years between 1984 and 1990. ICP 8863, 9174, 12745, ICPL 333, 8363, 88047, BWR 370, DPPA 85-2, 85-3, 85-8, 85-13, 85-14 and Bandapalera were resistant or moderately resistant at 7 to 10 of the 12 locations for 3 to 5 years, with an average wilt incidence of less than 15%.

    A framework for cloud-based context-aware information services for citizens in smart cities

    Background: In the context of smart cities, public participation and citizen science are key ingredients for informed and intelligent planning decisions and policy-making. However, citizens face a practical challenge in formulating coherent information sets from the large volumes of data available to them. These large data volumes materialise due to the increased utilisation of information and communication technologies in urban settings and local authorities' reliance on such technologies to govern urban settlements efficiently. To encourage effective public participation in the urban governance of smart cities, the public needs to be provided with the right contextual information about the characteristics and processes of their urban surroundings, so that they can contribute to the aspects of urban governance that affect them, such as socio-economic activities, quality of life and citizens' well-being. Cities, on the other hand, face challenges in crowdsourcing quality data collection and standardisation, service interoperability, and the provisioning of computational and data-storage infrastructure. Focus: In this paper, we highlight the issues that give rise to these multi-faceted challenges for citizens and public administrations of smart cities, identify the artefacts and stakeholders involved at both ends of the spectrum (data/service producers and consumers), and propose a conceptual framework to address these challenges. Based upon this conceptual framework, we present a Cloud-based architecture for context-aware citizen services for smart cities and discuss the components of the architecture through a common smart city scenario. A proof-of-concept implementation of the proposed architecture is also presented and evaluated. The results show the effectiveness of the cloud-based infrastructure for the development of a contextual service for citizens.
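
    To make the idea of a context-aware citizen service concrete, the sketch below filters a hypothetical aggregated city data feed by a citizen's current context before returning it, which is the essence of the contextual filtering step such an architecture performs. The data fields, district names and matching rule are assumptions for illustration, not the authors' implementation or API.

    # Context-aware filtering sketch (standard library only; all data are placeholders).
    from dataclasses import dataclass

    @dataclass
    class CitizenContext:
        district: str   # where the citizen currently is
        interest: str   # e.g. "air_quality" or "transport"

    # Hypothetical feed, standing in for data aggregated by the cloud layer.
    CITY_FEED = [
        {"district": "north", "topic": "air_quality", "value": "PM10 moderate"},
        {"district": "north", "topic": "transport",   "value": "tram delayed 10 min"},
        {"district": "south", "topic": "air_quality", "value": "PM10 low"},
    ]

    def contextual_service(ctx):
        """Return only the feed items relevant to the citizen's current context."""
        return [item for item in CITY_FEED
                if item["district"] == ctx.district and item["topic"] == ctx.interest]

    print(contextual_service(CitizenContext(district="north", interest="air_quality")))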

    Stress assessment in captive greylag geese (Anser anser)

    Chronic stress (or, more appropriately, "allostatic overload") may be physiologically harmful and can cause death in the most severe cases. Animals in captivity are thought to be particularly vulnerable to allostatic overload due to artificial housing and group makeup. Here we attempted to determine whether greylag geese (Anser anser) housed lifelong in captivity showed elevated levels of immunoreactive corticosterone metabolites (CORT) and ectoparasites in dropping samples, as well as changes in several hematological parameters (hematocrit, packed cell volume, total white blood cell count [TWBC], and heterophil:lymphocyte ratio [H:L]). All of these have been measured as indicators of chronic stress. Furthermore, we correlated the various stress parameters within individuals. Captive geese showed elevated values of CORT and ectoparasites relative to a wild population sampled in the vicinity of the area where the captive flock is held. The elevated levels, however, were by no means pathological and fall well within the range of other published values for wild greylag geese. We found no correlations between any of the variables measured from droppings and any of those collected from blood. Among the blood parameters, only the H:L ratio was negatively correlated with TWBC. We examine the problem of inferring allostatic overload when measuring only one stress parameter, as there is no consistency among the various measurements taken. We discuss the different aspects of each of the parameters measured, the extensive individual variation in response to stress, the timing at which different systems respond to a stressor, and what is actually measured at the time of data collection. We conclude that measuring only one stress parameter is often insufficient to evaluate the well-being of both wild and captively housed animals, and that collecting behavioral data on stress might be a suitable addition.